
[ENH, REF] Add uncombined support and modularize multi-file renamers #424

Merged (13 commits) on May 13, 2020

Conversation

@tsalo (Member) commented Feb 24, 2020

Closes #423. Any other multi-file situations that behave like multi-echo or uncombined data could be handled in the same manner.

Changes proposed:

  • Move multi-echo file renaming into a separate function.
  • Make magnitude/phase file renaming general (not just SBRefs).
  • Move magnitude/phase file renaming into a separate function.
  • Add support for uncombined data with _channel-<index> entity (not currently in specification).
  • Load metadata once for the file renaming step and generate booleans for multi-echo, complex, and uncombined status before calling the new functions (see the sketch after this list).
  • Change the locations of inserted entities in filenames based on the Entity Table from the current specification.
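
For orientation, a minimal sketch of the flow described above. This is a sketch only: the helper names update_multiecho_name and update_uncombined_name match functions added in this PR (one of them is quoted in the review below), but their bodies here are simplified stand-ins, and the real code inserts each entity at the position dictated by the BIDS entity table rather than appending it.

    # Illustrative sketch, not the actual heudiconv implementation.
    def update_multiecho_name(metadata, basename, echo_times):
        # insert an _echo-<index> entity based on this file's EchoTime
        echo_num = sorted(echo_times).index(metadata['EchoTime']) + 1
        return '%s_echo-%d' % (basename, echo_num)

    def update_uncombined_name(metadata, basename, channel_names):
        # insert a _channel-<index> entity based on this file's CoilString
        channel_num = sorted(channel_names).index(metadata['CoilString']) + 1
        return '%s_channel-%d' % (basename, channel_num)

    def rename_all(files_with_metadata):
        # files_with_metadata: list of (basename, metadata-dict) pairs.
        # Load metadata once, compute the status booleans, then dispatch.
        metadatas = [m for _, m in files_with_metadata]
        echo_times = {m.get('EchoTime') for m in metadatas} - {None}
        image_types = {t for m in metadatas for t in m.get('ImageType', [])}
        channels = {m.get('CoilString') for m in metadatas} - {None}

        is_multiecho = len(echo_times) > 1
        is_complex = 'M' in image_types and 'P' in image_types  # magnitude + phase
        is_uncombined = len(channels) > 1
        # (complex/magnitude-phase renaming is omitted from this sketch)

        for basename, metadata in files_with_metadata:
            if is_multiecho:
                basename = update_multiecho_name(metadata, basename, echo_times)
            if is_uncombined:
                basename = update_uncombined_name(metadata, basename, channels)
            yield basename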

@codecov (bot) commented Feb 24, 2020

Codecov Report

Merging #424 into master will increase coverage by 1.04%.
The diff coverage is 95.41%.


@@            Coverage Diff             @@
##           master     #424      +/-   ##
==========================================
+ Coverage   74.99%   76.03%   +1.04%     
==========================================
  Files          35       36       +1     
  Lines        2863     2946      +83     
==========================================
+ Hits         2147     2240      +93     
+ Misses        716      706      -10     
Impacted Files Coverage Δ
heudiconv/convert.py 84.90% <91.93%> (+5.47%) ⬆️
heudiconv/tests/test_convert.py 100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 75f2850...1dd72bf.

@yarikoptic (Member)

Awesome, thanks!

  • Add support for uncombined data with _channel-<index> entity (not currently in specification).

Ideally, there should be a PR against the BIDS spec introducing this, referencing existing BEPs which might already be taking advantage of it. E.g., I could see people suggesting _ch (instead of the longer _channel-) since it is IMHO a common abbreviation for it.

Also, we do need to figure out how to test all the refactored/added functionality. I can probably collect some phantom data here; I just need to know "how" and whether we have representative sequences.

@tsalo (Member Author) commented Feb 24, 2020

Ideally, there should be a PR against the BIDS spec introducing this, referencing existing BEPs which might already be taking advantage of it. E.g., I could see people suggesting _ch (instead of the longer _channel-) since it is IMHO a common abbreviation for it.

I have bids-standard/bids-specification#370 open, but I'd be happy to open a pull request as well. I'm just not sure if it will be reviewed.

Also, we do need to figure out how to test all the refactored/added functionality. I can probably collect some phantom data here; I just need to know "how" and whether we have representative sequences.

I don't have enough access to my institution's scanner to collect phantom data, but I can share the PDF for the SWI protocol we're using that has uncombined, complex, multi-echo data (all three of the functions I'm proposing here!). See attached (swi_acq-qsm_run-01.pdf).

@yarikoptic (Member)

we don't have SWI sequences here... But now I recalled that I have a sample dataset with a bunch of SWIs on a phantom; I am checking whether it is shareable. It might be a great use case, it is just too big :-( (3GB of DICOMs), so we might need to prepare a truncated version for any sensibly fast testing.

@pvelasco (Contributor)

I can collect a phantom dataset with that protocol right now, if it helps.
Would it be OK if I reduce the "base resolution" from 256 to, say, 64, to reduce file size?
I would still leave all other parameters untouched.

@yarikoptic (Member)

yes!
The smaller the data (without losing valuable characteristics, e.g. multiple TRs where applicable), the better. We might trim it in the end anyway to include, e.g., only a few channels out of all (32? 64?) available, etc.

@yarikoptic (Member)

But now I recalled that I have a sample dataset with a bunch of SWIs on a phantom,

I've checked -- I was wrong, it was not done on a phantom :-(

@pvelasco (Contributor)

I got the dataset.
I'm trying to figure out how to share via datalad. I'm totally new to this, so I'm not being successful.
I'm trying to add it to the dataset you created for me here
So far, I did:

cd /tmp
datalad install https://datasets.datalad.org/dicoms/velasco
cd velasco
datalad create --description "SWI sample dataset" -c text2git SWI

Then, I copied the data there.
What now? I'm trying to read the datalad documentation, but it's a little bit confusing.

@tsalo (Member Author) commented Feb 25, 2020

I've opened bids-standard/bids-specification#425, which uses ch-<index> (because I liked @yarikoptic's idea), and would be happy to update this PR to change the entity.

@yarikoptic (Member)

I got the dataset.
I'm trying to figure out how to share via datalad. I'm totally new to this, so I'm not being successful.
I'm trying to add it to the dataset you created for me here
So far, I did:

cd /tmp
datalad install https://datasets.datalad.org/dicoms/velasco
cd velasco
datalad create --description "SWI sample dataset" -c text2git SWI

AWESOME! The only thing that could have been done better is adding -d . to the last create command. That would have immediately registered SWI as a subdataset of your local clone of velasco. Now you would need to run datalad save -d . -m "Added SWI as subdataset" SWI in velasco.

Well, it is not required per se, but if you wanted your fork of the velasco superdataset to properly contain it -- that would be the way. Later, though, when I add it on datasets.datalad.org, you could run into conflicts when you do datalad update -r --merge there ;)

Then, I copied the data there.
What now?

Run datalad save -m "Added my SWI data" to add all the data into git-annex. The next step would be to "publish" that dataset somewhere. The git repo could go to GitHub; for the data there are way too many choices, so let's try just a few:

I'm trying to read the datalad documentation, but it's a little bit confusing.

Sorry about that. Let me guide you a bit. The best user-oriented documentation ATM is the http://handbook.datalad.org . For this particular aspect, see http://handbook.datalad.org/en/latest/basics/101-138-sharethirdparty.html#leveraging-third-party-infrastructure .

Alternatives (to what is presented in the chapter as of today):

  • GitHub LFS annex special remote: I have submitted "ENH: various tune ups and extension to the section on special remotes" (datalad-handbook/book#402), which extends the chapter with an additional example workflow -- GitHub LFS. See this particular commit with 2-step instructions. Limitation: total size/traffic for LFS would be 1GB, but hopefully your dataset is notably smaller.
  • That section references export-to-figshare but doesn't provide an example. It should be quite straightforward -- just run datalad export-to-figshare, follow the interactive questions, then go to your account on figshare, enter a better description etc., and "publish" that dataset publicly on figshare. Then publish your dataset (git/git-annex components) on GitHub -- and I would be able to datalad install it and get all the data.

@pvelasco (Contributor)

Second attempt to make the dataset available via Datalad:

myData=SWI
cd /tmp

# get my datalad dataset:
datalad install https://datasets.datalad.org/dicoms/velasco
cd velasco

# create a new dataset:
datalad create -d . --description "SWI sample dataset" -c text2git $myData
cd $myData

# Setup git-annex to use Box.com:
WEBDAV_USERNAME='[email protected]' \
WEBDAV_PASSWORD='xxxxxxx' \
git annex initremote box.com type=webdav url=https://dav.box.com/dav/git-annex chunk=50mb encryption=none
datalad create-sibling-github -d . SWIdataset --publish-depends box.com

# check siblings:
datalad siblings
# The output:
# .: here(+) [git]
# .: github(-) [https://github.com/pvelasco/SWIdataset.git (git)]
# .: box.com(+) [git]


# Now, copy the data:
cp -r /tmp/$myData/* ./
datalad save -m "Added my SWI data"
datalad status
# (it shows no untracked files/folders)

# publish the data:
datalad publish --to box.com --transfer-data all
datalad publish --to github --transfer-data all

At this point, I have a new GitHub repository with the tree structure of my dataset. The dcm files are just some symbolic links (to my git-annex Box.com folder, I expect).
However, nothing has been uploaded to the git-annex folder.
Do I have to upload the files to Box.com by hand?

@yarikoptic (Member)

oh, we totally hijacked this PR with all the datalad discussion... heh, sorry about that @tsalo.

Was there any output from datalad publish --to box.com --transfer-data all? You can now use git annex list (or git annex whereis) to see where any particular annexed file is known to be available from. If those files did not make it to your box.com upon datalad publish, you can also try a direct call to git annex copy --to=box.com.
I played with box.com so long ago that I cannot recall whether published files would be made publicly available... we will see, I guess.

@pvelasco (Contributor)

Sorry for the delay (it took a long while to upload all the 2K+ files to Box).
Here is the GitHub repo with the data:
https://github.com/pvelasco/SWIdataset.git
I tried to upload it to https://datasets.datalad.org, but I don't have an account there. Maybe somebody can put it in dicoms/velasco/.

(@yarikoptic , if it is OK, I can delete all my comments related to uploading the data, to clean up the PR.)

@yarikoptic (Member)

The box.com remote would be private, I believe, i.e. not really accessible without users authenticating to box.com... I found some old account there. Could you make sure the annexed files are available to [email protected]? I think you might need to add me as a collaborator or something.
To make sure -- is it ok to (re)share them from datasets.datalad.org? ;)

@yarikoptic (Member)

size of annexed files in working tree: 461.03 megabytes

would have probably made a nice <50M .tar.gz posted to figshare ;)

@pvelasco (Contributor)

The box.com remote would be private, I believe, i.e. not really accessible without users authenticating to box.com... I found some old account there. Could you make sure the annexed files are available to [email protected]? I think you might need to add me as a collaborator or something.

I made the git-annex folder available to anybody with the link. I'm not sure if they force you to have a box.com account. Can you try with your onerussian.com account?

To make sure -- is it ok to (re)share them from datasets.datalad.org? ;)

Yes, totally.

@yarikoptic (Member)

I made the git-annex folder available to anybody with the link

but what is the link? ;) The underlying problem is that git-annex only knows that:

$> git show git-annex:remote.log
8de219c1-d6a9-4154-8e60-929cd3383869 chunk=50mb encryption=none name=box.com type=webdav url=https://dav.box.com/dav/git-annex timestamp=1582900942.887371722s

i.e. that it is available from https://dav.box.com/dav/git-annex -- not linked to any account. If I provide my credentials, I get my own git-annex folder there. I would need to rename mine aside and somehow get yours made visible in my space.

@pvelasco (Contributor) commented Mar 3, 2020

The link for the git-annex is:
https://nyu.box.com/s/6iwc0xumzqilhwwdxrmwaesbqdl58lqy

Comment on lines 667 to 676
# Determine if data are complex (magnitude + phase)
magnitude_found = any(['M' in it for it in image_types])
phase_found = any(['P' in it for it in image_types])
is_complex = magnitude_found and phase_found
@tsalo (Member Author):

I think that this should address #421.

@yarikoptic (Member)

Woohoo -- FWIW, this branch (without current 0.7.0 master merged) is (more) correctly handling the multi-echo PD+T2w bundle (see #346 for more info/discussion):

it properly adds _echo- entities to files
$> heudiconv/utils/test-compare-two-versions.sh heudiconv{-master,-424} --bids -f reproin --files /home/yoh/datalad/dicoms/dartmouth-phantoms/bids_test6-PD+T2w/005-anat-PD+T2w_acq-tse-tra-3mm
Not running heudiconv since compare-versions/v0.7.0-1-g57c9c56 already exists
Running /home/yoh/picts/mris/heudiconv-424/venvs/dev3/bin/heudiconv with log in compare-versions/v0.6.0-20-g788bd96.log
Results differ: /home/yoh/picts/mris/compare-versions/v0.7.0-1-g57c9c56-v0.6.0-20-g788bd96.diff
 .heudiconv/qa/info/dicominfo.tsv                        |    4 
 .heudiconv/qa/info/heuristic.py                         |   64 
 sub-qa/anat/sub-qa_acq-tseXtraX3mm_PD+T2w1.json         |  114 
 sub-qa/anat/sub-qa_acq-tseXtraX3mm_PD+T2w1.nii.gz       |34146 -------------------------------------------------------------------------------------------------------------------------------------------------------------------
 sub-qa/anat/sub-qa_acq-tseXtraX3mm_PD+T2w2.json         |  114 
 sub-qa/anat/sub-qa_acq-tseXtraX3mm_PD+T2w2.nii.gz       |32279 ----------------------------------------------------------------------------------------------------------------------------------------------------------
 sub-qa/anat/sub-qa_acq-tseXtraX3mm_echo-1_PD+T2w.json   |  114 
 sub-qa/anat/sub-qa_acq-tseXtraX3mm_echo-1_PD+T2w.nii.gz |34146 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 sub-qa/anat/sub-qa_acq-tseXtraX3mm_echo-2_PD+T2w.json   |  114 
 sub-qa/anat/sub-qa_acq-tseXtraX3mm_echo-2_PD+T2w.nii.gz |32279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 sub-qa/sub-qa_scans.tsv                                 |    4 
 11 files changed, 66674 insertions(+), 66704 deletions(-)
heudiconv/utils/test-compare-two-versions.sh heudiconv{-master,-424} --bids -  62.28s user 1.56s system 97% cpu 1:05.34 total
so if I rename `_PD+T2w` to the current working proposal of BEP001, `_MESE`, it would become one step closer to being correct. I guess that data (available via datalad from http://datasets.datalad.org/?dir=/dicoms/dartmouth-phantoms/bids_test6-PD+T2w) could provide a simple regression test for this branch.

@yarikoptic (Member)

@pvelasco thanks again for sharing the data! Unfortunately I took a shortcut and didn't look into how to make peace with the box.com special remote, so I just downloaded the files "manually" and shared them from http://datasets.datalad.org/?dir=/dicoms/velasco/SWIdataset (AKA ///dicoms/velasco/SWIdataset for datalad). If you are interested, here are the commands you would need to run to make that repository "autoenabled" within your original repo on GitHub, so that people who datalad install it from GitHub (not from datasets.datalad.org) wouldn't need to add that remote manually. Assuming you have your GitHub repo as the "origin" remote:

# an intricacy of git-annex: to register a "git" special remote, it must also
# exist as a regular git remote, but under a different name
git remote add --fetch datasets.datalad.org_ http://datasets.datalad.org/dicoms/velasco/SWIdataset/.git

# we register that remote as a special remote in git annex with autoenable
git annex initremote datasets.datalad.org type=git location=http://datasets.datalad.org/dicoms/velasco/SWIdataset/.git autoenable=true

# optional - get rid of that dummy
git remote remove datasets.datalad.org_

# publish the updated git-annex branch (with the new file-availability information) to github
git push origin git-annex

and then people would be able to datalad install -g http://datasets.datalad.org/dicoms/velasco/SWIdataset and get all data from datasets.datalad.org.

@yarikoptic mentioned this pull request Mar 29, 2020
@yarikoptic (Member)

I have filed #435, which is the rebased version of this PR. Since our test coverage does not really cover those use cases, I didn't want to just force push. @tsalo - please review #435 and check that I haven't screwed anything up, and then I will force push here.
I have tested it on some local cases:

case 1: PD+T2w
$> heudiconv/utils/test-compare-two-versions.sh heudiconv{-master,} --bids -f reproin --files /home/yoh/datalad/dicoms/dartmouth-phantoms/bids_test6-PD+T2w/005-anat-PD+T2w_acq-tse-tra-3mm    
Running /home/yoh/picts/mris/heudiconv-master/venvs/dev3/bin/heudiconv with log in compare-versions/v0.7.0-24-g7d2c526.log
Running /home/yoh/picts/mris/heudiconv/venvs/dev3/bin/heudiconv with log in compare-versions/v0.7.0-34-g15abe59.log
Results differ: /home/yoh/picts/mris/compare-versions/v0.7.0-24-g7d2c526-v0.7.0-34-g15abe59.diff
 anat/sub-qa_acq-tseXtraX3mm_PD+T2w1.json         |  114 
 anat/sub-qa_acq-tseXtraX3mm_PD+T2w1.nii.gz       |34146 --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 anat/sub-qa_acq-tseXtraX3mm_PD+T2w2.json         |  114 
 anat/sub-qa_acq-tseXtraX3mm_PD+T2w2.nii.gz       |32279 ----------------------------------------------------------------------------------------------------------------------------------------------------------------
 anat/sub-qa_acq-tseXtraX3mm_echo-1_PD+T2w.json   |  114 
 anat/sub-qa_acq-tseXtraX3mm_echo-1_PD+T2w.nii.gz |34146 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 anat/sub-qa_acq-tseXtraX3mm_echo-2_PD+T2w.json   |  114 
 anat/sub-qa_acq-tseXtraX3mm_echo-2_PD+T2w.nii.gz |32279 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 sub-qa_scans.tsv                                 |    4 
case 2: multi-echo with SBRef
$> heudiconv/utils/test-compare-two-versions.sh heudiconv{-master,} --random-seed 1 --bids -f reproin --files dbic-misc/1075_sequence/sourcedata/sub-sid001567
Running /home/yoh/picts/mris/heudiconv-master/venvs/dev3/bin/heudiconv with log in compare-versions/v0.7.0-24-g7d2c526.log
Running /home/yoh/picts/mris/heudiconv/venvs/dev3/bin/heudiconv with log in compare-versions/v0.7.0-34-g15abe59.log
Results differ: /home/yoh/picts/mris/compare-versions/v0.7.0-24-g7d2c526-v0.7.0-34-g15abe59.diff
 filegroup.json | 2144 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------------------------------------------------------------------
 1 file changed, 1072 insertions(+), 1072 deletions(-)
and it seems to work as expected. I will later see if I can use @pvelasco's dataset.

@yarikoptic (Member)

FWIW, there is now a simplistic uncombined T1w: http://datasets.datalad.org/?dir=/dicoms/dartmouth-phantoms/ANAT-T1W_UNCOMBINED-20200330 and also a collection of sequences (EPI and DWI are multiband, but with no split channels) from Philips: http://datasets.datalad.org/?dir=/dicoms/umass-philips/Zheng_Test_03302020 . I will see if any of those comes in handy for establishing at least some basic testing.

@yarikoptic force-pushed the ref/modularize-multifile-renamers branch from 788bd96 to a2d9099 on April 1, 2020 15:48
@yarikoptic (Member)

FWIW, force pushed rebased on current master.

@tsalo (Member Author) commented Apr 2, 2020

@yarikoptic Is this PR waiting on anything (e.g., bids-standard/bids-specification#370, more testing, etc.) before it can be merged?

@yarikoptic (Member)

@yarikoptic Is this PR waiting on anything (e.g., bids-standard/bids-specification#370, more testing, etc.) before it can be merged?

yeah, both:

  • channel -- [ENH] Add coil entity for uncombined MR data bids-standard/bids-specification#425. Even we ourselves, I believe, have not settled on _channel- vs _ch-, or on whether to use <label> or <index>. My favorite would be _ch-<label>. Otherwise we would need to keep changing the code here to account for whatever we decide next there.
  • tests. At least some basic one. I have published two datasets, from @pvelasco and myself, which we could use, but I never got around to writing code for it.

@yarikoptic (Member)

FWIW, the test is IMHO more important ;) Since per-channel data is quite rare, I guess we could fix up the entity naming later, after the merge.

bpinsard added two commits to UNFmontreal/heudiconv that referenced this pull request Apr 22, 2020
@tsalo (Member Author) commented May 7, 2020

@yarikoptic I could excise the channel stuff and save it for another PR since it's built off of the modularization, if that makes things easier.

@yarikoptic (Member) left a review:

Let's proceed with the current state and maybe excise or tune things up as a last resort?
Maybe only change _channel to _ch, since that is what we seem to have converged on in that BIDS spec PR.
I have left a bunch of nit-picking pythonic comments; hopefully they will be taken as of value and not of pain ;) The largest one is that we will (possibly later) need to refactor again, to avoid code duplication across the now nicely modularized functions; meanwhile we could at least make the code a bit shorter and thus even easier to read. Also, since there are now nice functions, it would be nice to have unit test(s) - at least for one of them, so that later we can assure the same behavior.

channel_names.append(metadata.get('CoilString', None))
image_types.append(metadata.get('ImageType', None))
echo_times = [v for v in echo_times if v]
echo_times = sorted(list(set(echo_times)))
@yarikoptic (Member):

no need for an explicit list() here -- sorted would work fine on any iterator AFAIK

@yarikoptic (Member):

so it could be one line:

echo_times = sorted(set(v for v in echo_times if v))

IMHO still readable.


# Determine if data are complex (magnitude + phase)
magnitude_found = any(['M' in it for it in image_types])
phase_found = any(['P' in it for it in image_types])
@yarikoptic (Member):

minor: also no need for an explicit [] list -- any('P' in it for it in image_types) would be just fine, since you would then use a generator expression.

@yarikoptic (Member):

actually, since we do not care about those _found variables, and do not really need to retain the list of lists for image_types -- in the end we just want sets of values:

  • start with sets, not lists, and use either .add or .update (for image_types) - that would remove the need for all the sorted and explicit set calls.
  • And then here it would just become
    is_multiecho = len(filter(bool, echo_times)) > 1 (or is_multiecho = len(v for v in echo_times if v) > 1 in the current form)
    is_complex = 'M' in image_types and 'P' in image_types

@tsalo (Member Author):

It looks like it has to be is_multiecho = len(set(filter(bool, echo_times))) > 1 (in Python 3, filter returns an iterator without a len, and the set also deduplicates repeated echo times), but it's in there now and looks much cleaner.
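
For reference, a tiny runnable illustration of the agreed-upon checks (the sample metadata values below are made up for demonstration):

    # made-up example values; one echo time per converted file
    echo_times = [0.0146, 0.0292, 0.0146, None]
    image_types = {'ORIGINAL', 'PRIMARY', 'M', 'P'}

    # filter(bool, ...) drops None/zero entries; set() deduplicates repeats
    is_multiecho = len(set(filter(bool, echo_times))) > 1
    is_complex = 'M' in image_types and 'P' in image_types

    print(is_multiecho, is_complex)  # True True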

if bids_file and is_uncombined:
this_prefix_basename = update_uncombined_name(
bids_meta, this_prefix_basename, channel_names
)
@yarikoptic (Member):

why not have a single if bids_file: above all of the possible cases? IMHO, thanks to the nice variable names, the comments aren't even needed ;)

this_prefix_basename = this_prefix_basename.replace(
label, "_channel-%s%s" % (channel_number, label)
)
break
@yarikoptic (Member):

the duplicate lines got me thinking -- wouldn't it be the same as:

    if (label == filetype) or (label in this_prefix_basename):
            this_prefix_basename = this_prefix_basename.replace(
                label, "_channel-%s%s" % (channel_number, label)
            )
            break

@yarikoptic (Member):

and it feels like all 3 functions are largely duplicates of the same logic... ok -- I could look into refactoring it all later. As came up in #441, we seem to not even have a convenience construct and/or helpers to decompose / recompose a standard BIDS filename into key: value pairs (in the standard order defined in the BIDS entities table). I feel that we need a basic class such as BIDSFile (probably largely implemented by pybids already, but that one is IMHO too heavy a dependency ATM), which would have a simple dict interface for all the keys, plus an additional attribute (e.g. here you call it .filetype), and whose __str__ would just return the properly formatted filename. Then the checks would become much easier, etc. But ok -- we can do that later, I guess.
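
For concreteness, a rough sketch of the kind of BIDSFile helper described here. Everything in it is hypothetical: the interface and the abbreviated entity order are illustrative assumptions, not actual heudiconv or pybids code (the real helper was deferred to a later issue).

    from collections import OrderedDict

    # Hypothetical sketch; ENTITY_ORDER is a small subset of the BIDS entity
    # table, listed in its standard order.
    ENTITY_ORDER = ['sub', 'ses', 'acq', 'ce', 'rec', 'dir', 'run', 'echo', 'ch']

    class BIDSFile:
        def __init__(self, entities, suffix):
            # store entities in standard order regardless of input order
            self.entities = OrderedDict(
                (k, entities[k]) for k in ENTITY_ORDER if k in entities)
            self.suffix = suffix  # e.g. 'T1w' (called .filetype in this review)

        def __getitem__(self, key):
            return self.entities[key]

        def __setitem__(self, key, value):
            if key not in ENTITY_ORDER:
                raise KeyError('Unknown BIDS entity: %s' % key)
            self.entities[key] = value
            # re-sort so an inserted entity lands in the standard position
            self.entities = OrderedDict(
                (k, self.entities[k]) for k in ENTITY_ORDER if k in self.entities)

        def __str__(self):
            parts = ['%s-%s' % (k, v) for k, v in self.entities.items()]
            return '_'.join(parts + [self.suffix])

    # Example: an inserted echo entity lands in the right position automatically
    f = BIDSFile({'sub': 'qa', 'acq': 'tseXtraX3mm'}, 'T2w')
    f['echo'] = 1
    print(f)  # sub-qa_acq-tseXtraX3mm_echo-1_T2w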

@yarikoptic (Member):

Please add a dedicated unit-test for at least one of these functions.
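
For illustration, a minimal sketch of what such a test could look like. The signature follows the call quoted earlier in this review (metadata, basename, channel_names); the expected output string is an assumption -- the index derivation and zero-padding may differ from the actual implementation, and this is not the test that was ultimately added.

    from heudiconv.convert import update_uncombined_name

    def test_update_uncombined_name():
        # assumed inputs: per-file metadata plus the full list of coil names
        metadata = {'CoilString': 'H2'}
        channel_names = ['H1', 'H2', 'H3']
        out = update_uncombined_name(metadata, 'sub-01_run-01_T1w', channel_names)
        # expected form is an assumption; exact index formatting may differ
        assert out == 'sub-01_run-01_channel-02_T1w'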

@tsalo (Member Author):

Done!

tsalo added 3 commits May 8, 2020 13:26:

  • Some refactoring of name-updaters.
  • Add tests for name-updaters.
  • Merge …s' into ref/modularize-multifile-renamers

# Conflicts:
#	CHANGELOG.md
#	docs/conf.py
#	docs/installation.rst
#	docs/usage.rst
#	heudiconv/convert.py
#	heudiconv/info.py
@yarikoptic (Member)

Now there are even more tests than before -- so it must not be more wrong! ;) Let's proceed, but I just filed issue #452 so we do not forget about a helper for BIDS filename parsing/tune-up. Its implementation will be used for changes here and elsewhere in the code, and for #441.

Successfully merging this pull request may close these issues:

Converting uncombined (channel-level) data